
Support MCMC Strategy in Splatfacto #3436

Open · wants to merge 1 commit into main

Conversation

yufengzh (Contributor)

This PR adds the MCMC strategy for Splatfacto. To use it, specify the strategy in the CLI:

ns-train splatfacto --pipeline.model.strategy=mcmc

@abrahamezzeddine commented Sep 24, 2024

Hello,

I just tried this and it seems to work. I am, however, wondering about the huge splats being created: large splats pass straight through the model, which creates the illusion of hue artifacts inside the area of interest. See the image below:

[image: render showing large splat artifacts]

Is that an inherent problem with MCMC in general? Here is another example, with a depth map, showing how huge splats just "go through" the model:

[image: depth map showing splats passing through the model]

They are several orders of magnitude larger than the sparse cloud itself.

@kerrj (Collaborator) commented Oct 1, 2024

Thanks for the PR! I believe this is missing the opacity and scale regularization from the original MCMC paper; do you think you could add those? That might be a reason for these large Gaussians during training.

@yufengzh (Contributor, Author) commented Oct 2, 2024

Does gsplat have those regularization terms implemented yet?

@CameronFraser (Contributor) commented Oct 3, 2024

@yufengzh they are not part of gsplat; there is an example of how to implement them here: https://github.com/nerfstudio-project/gsplat/blob/main/examples/simple_trainer.py#L661

  # regularizations
  if cfg.opacity_reg > 0.0:
      loss = (
          loss
          + cfg.opacity_reg
          * torch.abs(torch.sigmoid(self.splats["opacities"])).mean()
      )
  if cfg.scale_reg > 0.0:
      loss = (
          loss
          + cfg.scale_reg * torch.abs(torch.exp(self.splats["scales"])).mean()
      )

@CameronFraser (Contributor)

@kerrj I see there is a use_scale_regularization config here: https://github.com/nerfstudio-project/nerfstudio/blob/main/nerfstudio/models/splatfacto.py#L195 and a scale_reg loss calculated here: https://github.com/nerfstudio-project/nerfstudio/blob/main/nerfstudio/models/splatfacto.py#L720. But doesn't this need to be applied to the main loss, as in gsplat? Or does that happen elsewhere?

@kerrj (Collaborator) commented Oct 14, 2024

The existing scale_reg loss is actually a different penalty, from the PhysGauss paper, and is unrelated to MCMC. (It is added to the main loss a few lines below, when it is added to the loss dict.)
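For contrast with the MCMC terms, a PhysGauss-style penalty is a ratio constraint rather than a mean penalty: it discourages any Gaussian whose largest axis scale exceeds its smallest by more than some threshold. A rough framework-free sketch (plain Python instead of torch; the function name, max_gauss_ratio threshold, and exact form are approximations, not splatfacto's verbatim code):

```python
import math

def scale_ratio_penalty(raw_scales, max_gauss_ratio=10.0):
    """Sketch of a PhysGauss-style anisotropy penalty (hypothetical helper).

    raw_scales: per-Gaussian [sx, sy, sz] in log space (exp is the
    activation, as in splatfacto). Per Gaussian, the max/min ratio of the
    activated axis scales is penalized only once it exceeds the threshold.
    """
    total = 0.0
    for s in raw_scales:
        axes = [math.exp(v) for v in s]
        ratio = max(axes) / min(axes)
        total += max(ratio, max_gauss_ratio) - max_gauss_ratio  # zero if within threshold
    return total / len(raw_scales)

# An isotropic Gaussian incurs no penalty; a highly elongated one does.
assert scale_ratio_penalty([[0.0, 0.0, 0.0]]) == 0.0
assert scale_ratio_penalty([[3.0, 0.0, 0.0]]) > 0.0
```

Note the key difference from MCMC: this term only constrains the shape (anisotropy) of each Gaussian, so arbitrarily large but roughly isotropic splats still pass through it unpenalized.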

@yufengzh gsplat only provides the rasterization interface and the splitting strategy; since the loss calculation is done inside splatfacto, the terms will have to be added here. The loss is pretty straightforward if I recall: it just penalizes the mean of all scales and the mean of all opacities (there's an equation in the paper describing it).
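The two MCMC penalties described above reduce to weighted means over the activated parameters, as in the gsplat simple_trainer.py snippet earlier in the thread. A minimal framework-free sketch (plain Python instead of torch; the function name and the opacity_reg/scale_reg weights are illustrative placeholders):

```python
import math

def mcmc_reg_loss(raw_opacities, raw_scales, opacity_reg=0.01, scale_reg=0.01):
    """Sketch of the MCMC regularizers (hypothetical helper): the mean
    activated opacity (sigmoid of the raw parameter) plus the mean
    activated scale (exp of the raw parameter), each with its own weight.
    This value would be added on top of the reconstruction loss."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    opacity_term = sum(abs(sigmoid(o)) for o in raw_opacities) / len(raw_opacities)
    scale_term = sum(abs(math.exp(s)) for s in raw_scales) / len(raw_scales)
    return opacity_reg * opacity_term + scale_reg * scale_term

# Larger raw scales are penalized harder, which is exactly the pressure
# that should shrink the oversized Gaussians reported above.
assert mcmc_reg_loss([0.0], [2.0]) > mcmc_reg_loss([0.0], [-2.0])
```

In splatfacto the equivalent torch version (as in the gsplat example) would be added into the loss dict alongside the existing terms.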

@CameronFraser (Contributor)

@yufengzh are you still working on this PR? Excited to see it get merged. Do you need any help with the regularization terms?

@Ben-Mack commented Nov 3, 2024

@liruilong940607 @jb-ye Can you review this PR? Is there anything left to do before merging?
I see that many people are really looking forward to MCMC being implemented in Nerfstudio. Hope you can review this PR soon.

@CameronFraser (Contributor)

@Ben-Mack the PR is not complete; the regularization terms still need to be added.

5 participants